Differentially Private Speaker Anonymization

Authors

Abstract

Sharing real-world speech utterances is key to the training and deployment of voice-based services. However, it also raises privacy risks, as speech contains a wealth of personal data. Speaker anonymization aims to remove speaker information from a speech utterance while leaving its linguistic and prosodic attributes intact. State-of-the-art techniques operate by disentangling the speaker information (represented via a speaker embedding) from these attributes and re-synthesizing speech based on the speaker embedding of another speaker. Prior research in the privacy community has shown that anonymization often provides brittle protection, even less so any provable guarantee. In this work, we show that disentanglement is indeed not perfect: linguistic and prosodic attributes still contain speaker information. We remove speaker information from these attributes by introducing differentially private feature extractors based on an autoencoder and an automatic speech recognizer, respectively, trained using noise layers. We plug these extractors into a state-of-the-art anonymization pipeline and generate, for the first time, private speech utterances with a provable upper bound on the speaker information they contain. We evaluate empirically the privacy and utility resulting from our approach on the LibriSpeech data set. Experimental results show that the generated utterances retain very high utility for speech recognition training and inference, while being much better protected against strong adversaries who leverage full knowledge of the anonymization process to try to infer the speaker identity.
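To give a concrete picture of the idea behind a differentially private "noise layer" in a feature extractor, the following PyTorch sketch clips the L1 norm of an intermediate feature vector to bound its sensitivity and then adds Laplace noise calibrated to that bound and a privacy parameter epsilon. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the layer name, the toy encoder, the clipping bound, and the epsilon value are all assumptions made for the example.

# Minimal sketch of a Laplace noise layer on top of a feature extractor.
# Not the paper's architecture; names and hyperparameters are illustrative.

import torch
import torch.nn as nn


class LaplaceNoiseLayer(nn.Module):
    """Clip each feature vector to L1 norm <= clip_bound, then add Laplace noise."""

    def __init__(self, clip_bound: float = 1.0, epsilon: float = 1.0):
        super().__init__()
        self.clip_bound = clip_bound
        self.epsilon = epsilon

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clipping bounds the L1 sensitivity of the features.
        l1 = x.abs().sum(dim=-1, keepdim=True).clamp(min=1e-12)
        x = x * torch.clamp(self.clip_bound / l1, max=1.0)
        # Laplace noise with scale = sensitivity / epsilon.
        scale = self.clip_bound / self.epsilon
        noise = torch.distributions.Laplace(0.0, scale).sample(x.shape).to(x.device)
        return x + noise


# Hypothetical use on an autoencoder-style bottleneck (dimensions are made up):
encoder = nn.Sequential(nn.Linear(80, 32), nn.ReLU(), nn.Linear(32, 16))
noisy_bottleneck = LaplaceNoiseLayer(clip_bound=1.0, epsilon=1.0)

mel_frames = torch.randn(4, 80)            # dummy batch of spectral features
private_features = noisy_bottleneck(encoder(mel_frames))

In this sketch the noise scale grows as epsilon shrinks, which is the usual privacy/utility trade-off: smaller epsilon gives stronger protection of the speaker information carried by the features, at the cost of noisier features for downstream synthesis.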


Similar Articles

Differentially Private Local Electricity Markets

Privacy-preserving electricity markets have a key role in steering customers towards participation in local electricity markets by guaranteeing the protection of their sensitive information. Moreover, these markets make it possible to statically release and share the market outputs for social good. This paper aims to design a market for local energy communities by implementing Differential Privacy (DP)...


Differentially Private Variational Dropout

Deep neural networks with their large number of parameters are highly flexible learning systems. The high flexibility in such networks brings with it some serious problems such as overfitting, and regularization is used to address this problem. A currently popular and effective regularization technique for controlling the overfitting is dropout. Often, large data collections required for neural ne...


Differentially Private Rank Aggregation

Given a collection of rankings of a set of items, rank aggregation seeks to compute a ranking that can serve as a single best representative of the collection. Rank aggregation is a well-studied problem and a number of effective algorithmic solutions have been proposed in the literature. However, when individuals are asked to contribute a ranking, they may be concerned that their personal prefe...


Differentially Private Policy Evaluation

We present the first differentially private algorithms for reinforcement learning, which apply to the task of evaluating a fixed policy. We establish two approaches for achieving differential privacy, provide a theoretical analysis of the privacy and utility of the two algorithms, and show promising results on simple empirical examples.


Differentially Private Dropout

Large data collections required for the training of neural networks often contain sensitive information such as the medical histories of patients, and the privacy of the training data must be preserved. In this paper, we introduce a dropout technique that provides an elegant Bayesian interpretation to dropout, and show that the intrinsic noise added, with the primary goal of regularization, can...



Journal

Journal title: Proceedings on Privacy Enhancing Technologies

Year: 2023

ISSN: 2299-0984

DOI: https://doi.org/10.56553/popets-2023-0007